AI Security Ops: The Hallucination Problem | Episode 20

Updated: 2025-09-11

Description

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.com


Episode 20 - The Hallucination Problem


In this episode of AI Security Ops, Joff Thyer and Brian Fehrman from Black Hills Information Security dive into the hallucination problem in large language models and generative AI.


They explain what hallucinations are, why they happen, and the risks they pose in real-world AI deployments. The discussion covers security implications, practical examples, and strategies organizations can use to mitigate these issues through stronger design, monitoring, and testing.


A must-watch for cybersecurity professionals, AI researchers, and anyone curious about the limitations and challenges of modern AI systems.


----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/



Black Hills Information Security